We develop penalized two-pass regressions with time-varying factor loadings. The penalization in the first pass enforces sparsity for the drivers of time variation while also maintaining compatibility with the no-arbitrage restriction by regularizing appropriate groups of coefficients. The second pass delivers risk premia estimates to predict equity excess returns. Our Monte Carlo results, and our empirical results on a large cross-sectional data set of individual stocks, show that penalization without grouping can lead to nearly all estimated time-varying models violating the no-arbitrage restriction. Moreover, our results show that the proposed method reduces prediction errors compared to a penalized approach without appropriate grouping, and compared to a time-invariant factor model.
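The grouped penalization is the abstract's central device. As background only (this is not the paper's estimator), sparsity under L1-type penalties comes from the soft-thresholding proximal operator, and its group variant keeps or zeroes a whole block of coefficients together, which is what "regularizing appropriate groups of coefficients" refers to:

```python
from typing import List

def soft_threshold(x: float, lam: float) -> float:
    """Proximal operator of the L1 penalty: shrink toward zero and set
    small coefficients exactly to zero. This is how a lasso-type first
    pass produces a sparse set of time-variation drivers."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def group_soft_threshold(group: List[float], lam: float) -> List[float]:
    """Group-lasso proximal operator: the entire coefficient group is
    either shrunk as a whole or zeroed as a whole, rather than
    coefficient by coefficient."""
    norm = sum(v * v for v in group) ** 0.5
    if norm <= lam:
        return [0.0] * len(group)
    return [(1.0 - lam / norm) * v for v in group]
```

For example, `group_soft_threshold([3.0, 4.0], 1.0)` shrinks the whole group by the factor 1 - lam/||g|| = 0.8, while `lam = 6.0` zeroes it entirely.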
Most machine learning methods and algorithms give high priority to prediction performance, which may not always correspond to the priority of the users. In many cases, practitioners and researchers in different fields, from engineering to genetics, require interpretability and replicability of the results, especially in settings where, for example, not all attributes may be available to them. As a consequence, there is a need to make the outputs of machine learning algorithms more interpretable and to deliver a library of "equivalent" learners (in terms of prediction performance) that users can select based on attribute availability, in order to test and/or make use of these learners for predictive or diagnostic purposes. To address these needs, we propose to study a procedure that combines screening and wrapper approaches and that, based on a user-specified learning method, greedily explores the attribute space to find a library of sparse learners with consequently low data collection and storage costs. This new method (i) delivers a low-dimensional network of attributes that can be easily interpreted, and (ii) increases the potential replicability of results, based on the diversity of attribute combinations defining strong learners with equivalent predictive power. We call this algorithm the "Sparse Wrapper AlGorithm" (SWAG).
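The greedy exploration of the attribute space described above can be sketched in a few lines. The set sizes, the ranking rule, and the `score` callback below are illustrative assumptions, not the published SWAG specification:

```python
from typing import Callable, Dict, FrozenSet, List, Sequence

def swag_style_screen(
    attributes: Sequence[str],
    score: Callable[[FrozenSet[str]], float],
    max_size: int = 3,
    keep_per_size: int = 5,
) -> Dict[int, List[FrozenSet[str]]]:
    """Greedy screen-and-wrap sketch: at each dimension, only attribute
    sets built from the best-performing smaller sets are evaluated, so
    the search stays sparse instead of exhaustive."""
    library: Dict[int, List[FrozenSet[str]]] = {}
    # Dimension 1: score every single attribute and keep the best ones.
    singles = sorted((frozenset([a]) for a in attributes), key=score, reverse=True)
    library[1] = singles[:keep_per_size]
    # Dimensions 2..max_size: extend only the retained sets by one attribute.
    for size in range(2, max_size + 1):
        candidates = {
            s | frozenset([a])
            for s in library[size - 1]
            for a in attributes
            if a not in s
        }
        ranked = sorted(candidates, key=score, reverse=True)
        library[size] = ranked[:keep_per_size]
    return library
```

The returned `library` is exactly the kind of object the abstract motivates: several small attribute sets of equivalent predictive power, from which a user can pick based on which attributes are available.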
This report summarizes the work carried out by the authors during the Twelfth Montreal Industrial Problem Solving Workshop, held at Universit\'e de Montr\'eal in August 2022. The team tackled a problem submitted by CBC/Radio-Canada on the theme of Automatic Text Simplification (ATS).
Wearable sensors for measuring head kinematics can be noisy due to imperfect interfaces with the body. Mouthguards are used to measure head kinematics during impacts in traumatic brain injury (TBI) studies, but deviations from reference kinematics can still occur due to potential looseness. In this study, deep learning is used to compensate for the imperfect interface and improve measurement accuracy. A set of one-dimensional convolutional neural network (1D-CNN) models was developed to denoise mouthguard kinematics measurements along three spatial axes of linear acceleration and angular velocity. The denoised kinematics had significantly reduced errors compared to reference kinematics, and reduced errors in brain injury criteria and tissue strain and strain rate calculated via finite element modeling. The 1D-CNN models were also tested on an on-field dataset of college football impacts and a post-mortem human subject dataset, with similar denoising effects observed. The models can be used to improve detection of head impacts and TBI risk evaluation, and potentially extended to other sensors measuring kinematics.
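The trained 1D-CNN weights are specific to the study's data, but the core operation is easy to show. The sketch below applies a "same"-padded 1-D convolution to each kinematic channel independently; the three-tap averaging kernel is a stand-in for learned filters, not the paper's model:

```python
from typing import List, Sequence

def conv1d_same(signal: Sequence[float], kernel: Sequence[float]) -> List[float]:
    """1-D convolution with zero padding so the output length matches the
    input -- the basic building block of a 1D-CNN denoiser."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [
        sum(padded[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal))
    ]

def denoise_channels(
    channels: Sequence[Sequence[float]], kernel: Sequence[float]
) -> List[List[float]]:
    """Apply the same kernel to each spatial axis (e.g. three linear-
    acceleration and three angular-velocity channels) independently."""
    return [conv1d_same(ch, kernel) for ch in channels]
```

A learned denoiser stacks many such convolutions with nonlinearities between them, but the per-axis data flow is the same.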
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
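Patch-based training, the most common workaround for oversized samples reported in the survey, amounts to tiling each image before training. A minimal sketch, assuming non-overlapping tiles and dropping partial border patches (both simplifying choices, not something the survey prescribes):

```python
from typing import List, Sequence

def extract_patches(
    image: Sequence[Sequence[int]], patch_h: int, patch_w: int
) -> List[List[List[int]]]:
    """Split a 2-D array (list of rows) into non-overlapping patches of
    size patch_h x patch_w, dropping any partial patches at the borders.
    Each patch can then be fed to the model in place of the full image."""
    rows, cols = len(image), len(image[0])
    return [
        [row[c:c + patch_w] for row in image[r:r + patch_h]]
        for r in range(0, rows - patch_h + 1, patch_h)
        for c in range(0, cols - patch_w + 1, patch_w)
    ]
```

Real pipelines usually add overlap and padding so border pixels are not lost, and stitch per-patch predictions back together at inference time.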
Scene understanding is a major challenge of today's computer vision. Central to this task is image segmentation, since scenes are often provided as a set of pictures. Nowadays, many such datasets also provide 3D geometry information given as a 3D point cloud acquired by a laser scanner or a depth camera. To exploit this geometric information, many current approaches rely on both a 2D loss and a 3D loss, requiring not only 2D per-pixel labels but also 3D per-point labels. However, obtaining a 3D groundtruth is challenging, time-consuming and error-prone. In this paper, we show that image segmentation can benefit from 3D geometric information without requiring any 3D groundtruth, by training the geometric feature extraction with a 2D segmentation loss in an end-to-end fashion. Our method starts by extracting a map of 3D features directly from the point cloud by using a lightweight and simple 3D encoder neural network. The 3D feature map is then used as an additional input to a classical image segmentation network. During training, the 3D feature extraction is optimized for the segmentation task by back-propagation through the entire pipeline. Our method exhibits state-of-the-art performance with much lighter input dataset requirements, since no 3D groundtruth is required.
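The abstract's 3D encoder is learned end-to-end, so purely to illustrate how per-point features can become a dense 2-D feature map consumable by a segmentation network, here is a naive average-pooling projection. The pixel-binning scheme is an assumption for illustration, not the authors' architecture:

```python
from typing import List, Sequence, Tuple

def project_point_features(
    points: Sequence[Tuple[int, int]],
    features: Sequence[Sequence[float]],
    height: int,
    width: int,
    dim: int,
) -> List[List[List[float]]]:
    """Average the feature vectors of all 3-D points that project onto
    each pixel, producing a height x width x dim feature map that can be
    concatenated to the RGB input of a 2-D segmentation network. Pixels
    with no points stay zero."""
    fmap = [[[0.0] * dim for _ in range(width)] for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for (u, v), feat in zip(points, features):
        if 0 <= u < width and 0 <= v < height:
            counts[v][u] += 1
            for d in range(dim):
                fmap[v][u][d] += feat[d]
    for v in range(height):
        for u in range(width):
            if counts[v][u]:
                fmap[v][u] = [x / counts[v][u] for x in fmap[v][u]]
    return fmap
```

In the end-to-end setting described above, the features themselves would come from a learnable point-cloud encoder, so the 2D segmentation loss shapes them via backpropagation.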
An eco-system of agents, each having its own policy with some, but limited, generalizability, has proven to be a reliable approach to increasing generalization across procedurally generated environments. In such an approach, new agents are regularly added to the eco-system when a new environment is encountered that is outside the scope of the eco-system. The speed of adaptation and the general effectiveness of the eco-system approach depend highly on the initialization of new agents. In this paper we propose different techniques for such initialization and study their impact. We then rework the eco-system setup to use forked agents, which brings better results than the initial eco-system approach with a drastically reduced number of training cycles.
We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP). We first handle the setting of binary classification and then extend our rule to the more general setting of density estimation (with respect to the total variation metric). The existence of a universally consistent DP learner reveals a stark difference with the distribution-free PAC model. Indeed, in the latter DP learning is extremely limited: even one-dimensional linear classifiers are not privately learnable in this stringent model. Our result thus demonstrates that by allowing the learning rate to depend on the target distribution, one can circumvent the above-mentioned impossibility result and in fact, learn \emph{arbitrary} distributions by a single DP algorithm. As an application, we prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal \emph{labeled} sample complexity of $\tilde{O}(d/\varepsilon)$ labeled examples (and with an unlabeled sample complexity that can depend on the target distribution).
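The universally consistent DP learner itself is beyond an abstract, but the mechanism underlying most differentially private algorithms is simple to state: add noise calibrated to the query's sensitivity. A textbook Laplace-mechanism sketch for a counting query, given as background only and not as the paper's construction:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two i.i.d.
    exponentials (1 - random() lies in (0, 1], so log is always safe)."""
    e1 = -scale * math.log(1.0 - random.random())
    e2 = -scale * math.log(1.0 - random.random())
    return e1 - e2

def dp_count(data, predicate, epsilon: float) -> float:
    """epsilon-DP counting query: adding or removing one record changes a
    count by at most 1 (sensitivity 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon)
```

The result quoted above says something much stronger than this: a single DP algorithm whose error vanishes on every target distribution, with the convergence rate allowed to depend on that distribution.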
There is a growing interest in the use of reduced-precision arithmetic, exacerbated by the recent interest in artificial intelligence, especially with deep learning. Most architectures already provide reduced-precision capabilities (e.g., 8-bit integer, 16-bit floating point). In the context of FPGAs, any number format and bit-width can even be considered. In computer arithmetic, the representation of real numbers is a major issue. Fixed-point (FxP) and floating-point (FlP) are the main options to represent reals, both with their advantages and drawbacks. This chapter presents both FxP and FlP number representations, and draws a fair comparison between their cost, performance and energy, as well as their impact on accuracy during computations. It is shown that the choice between FxP and FlP is not obvious and strongly depends on the application considered. In some cases, low-precision floating-point arithmetic can be the most effective and provides some benefits over the classical fixed-point choice for energy-constrained applications.
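The fixed-point side of the comparison can be made concrete with a toy Q-format quantizer; the 16-bit word and 12 fractional bits below are illustrative choices, not values taken from the chapter:

```python
def to_fixed(x: float, frac_bits: int, word_bits: int = 16) -> int:
    """Quantize a real to a signed fixed-point integer with `frac_bits`
    fractional bits, saturating at the word's representable range."""
    scaled = round(x * (1 << frac_bits))
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))

def from_fixed(q: int, frac_bits: int) -> float:
    """Recover the real value represented by a fixed-point integer."""
    return q / (1 << frac_bits)
```

Round-tripping 0.1 through this 16-bit format with 12 fractional bits gives 410/4096, an absolute error bounded by half an LSB (2^-13) anywhere in the format's range. A floating-point format instead spends bits on an exponent, trading this uniform absolute error for a relative one, which is the heart of the cost/accuracy trade-off the chapter examines.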
With the growth of residential rooftop PV adoption in recent decades, the problem of effective layout design has become increasingly important in recent years. Although a number of automated methods have been introduced, these tend to rely on simplifying assumptions and heuristics to improve computational tractability. We demonstrate a fully automated layout design pipeline that attempts to solve a more general formulation with greater geometric flexibility that accounts for shading losses. Our approach generates rooftop areas from satellite imagery and uses MINLP optimization to select panel positions, azimuth angles and tilt angles on an individual basis rather than imposing any predefined layouts. Our results demonstrate that although several common heuristics are often effective, they may not be universally suitable due to complications resulting from geometric restrictions and shading losses. Finally, we evaluate a few specific heuristics from the literature and propose a potential new rule of thumb that may help improve rooftop solar energy potential when shading effects are considered.